Search Results: "riku"

5 August 2014

Riku Voipio: Testing qemu 2.1 arm64 support

Qemu 2.1 was just released a few days ago, and is now available in Debian/unstable. Trying out a (virtual) arm64 machine is now just a few steps away for unstable users:

$ sudo apt-get install qemu-system-arm
$ wget https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-arm64-disk1.img
$ wget https://cloud-images.ubuntu.com/trusty/current/unpacked/trusty-server-cloudimg-arm64-vmlinuz-generic
$ qemu-system-aarch64 -m 1024 -cpu cortex-a57 -nographic -machine virt -kernel trusty-server-cloudimg-arm64-vmlinuz-generic \
-append 'root=/dev/vda1 rw rootwait mem=1024M console=ttyAMA0,38400n8 init=/usr/lib/cloud-init/uncloud-init ds=nocloud ubuntu-pass=randomstring' \
-drive if=none,id=image,file=trusty-server-cloudimg-arm64-disk1.img \
-netdev user,id=user0 -device virtio-net-device,netdev=user0 -device virtio-blk-device,drive=image
[ 0.000000] Linux version 3.13.0-32-generic (buildd@beebe) (gcc version 4.8.2 (Ubuntu/Linaro 4.8.2-19ubuntu1) ) #57-Ubuntu SMP Tue Jul 15 03:52:14 UTC 2014 (Ubuntu 3.13.0-32.57-generic 3.13.11.4)
[ 0.000000] CPU: AArch64 Processor [411fd070] revision 0
...
-snip-
...
ubuntu@ubuntu:~$ cat /proc/cpuinfo
Processor : AArch64 Processor rev 0 (aarch64)
processor : 0
Features : fp asimd evtstrm
CPU implementer : 0x41
CPU architecture: AArch64
CPU variant : 0x1
CPU part : 0xd07
CPU revision : 0

Hardware : linux,dummy-virt
ubuntu@ubuntu:~$
The "init=/usr/lib/cloud-init/uncloud-init ds=nocloud ubuntu-pass=randomstring" is ubuntu cloud stuff that will set the ubuntu user password to "randomstring" - don't use "randomstring" literally there, if you are connected to internets... For more detailed writeup of using qemu-system-aarch64, check the excellent writeup from Alex Bennee.

8 May 2014

Riku Voipio: Arm builder updates

Debian has recently received a donation of 8 build machines from Marvell. The new machines come with quad-core MV78460 Armada XP CPUs, DDR3 DIMM slots so we can plug in more memory, and speedy SATA ports. They replace the well-served Marvell MV78200 based builders - ones that have been building Debian armel since 2009. We are planning a more detailed announcement, but I'll provide a quick summary: The speed increase provided by the MV78460 can be seen by comparing build times on selected builds since early April: Qemu build times. We can now build Qemu in 2h instead of 16h - 8x faster than before! Certainly a substantial improvement, so impressive kit from Marvell! But not all packages gain this amount of speedup: webkitgtk build times. This example, webkitgtk, builds barely 3x faster. The explanation is found in debian/rules of webkitgtk:

# Parallel builds are unstable, see #714072 and #722520
# ifneq (,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
# NUMJOBS = $(patsubst parallel=%,%,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
# MAKEARGUMENTS += -j$(NUMJOBS)
# endif
The old builders are single-core[1], so regardless of parallel building, you can easily max out the CPU. The new builders will use only 1 of 4 cores without parallel build support in debian/rules. In this buildd CPU usage graph, we see that most of the time only one CPU is busy. So for fast package build times.. make sure your package supports parallel building. For developers, abel.debian.org is a porter machine with an Armada XP. It has schroots for both armel and armhf. Set "DEB_BUILD_OPTIONS=parallel=4" and off you go. Finally I'd like to thank Thomas Petazzoni, Maen Suleiman, Hector Oron, Steve McIntyre, Adam Conrad and Jon Ward for making the upgrade happen. Meanwhile, we have unrelated trouble - a bunch of disks have broken within a few days of each other. I take it the warranty just ran out... [1] Only from Linux's point of view - the MV78200 actually has 2 cores, they are just not SMP or coherent. You could run an RTOS on one core while you run Linux on the other.
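As a sketch, a parallel test build on the porter box could look something like this - the schroot session name and the package are placeholders, list the real chroot names with schroot -l:

$ ssh abel.debian.org
$ schroot -c sid_armhf            # example chroot name
$ apt-get source somepackage      # hypothetical package
$ cd somepackage-*/
$ DEB_BUILD_OPTIONS=parallel=4 dpkg-buildpackage -us -uc -b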

21 February 2014

Riku Voipio: Where the armel buildd time went

Wanna-build, wanna-build, which packages spent the most time on armel buildds since the beginning of 2013?

package sum(build_time)
-------------------------+--------------
libreoffice 114 09:16:34
linux 113 02:58:50
gcc-4.8 064 01:21:09
webkitgtk 059 19:09:27
acl2 043 16:40:50
gcc-4.7 028 14:03:53
iceweasel 026 19:02:13
gcc-snapshot 026 01:31:21
openjdk-7 020 02:41:53
php5 019 16:13:22
llvm-toolchain-3.3 017 19:05:38
qt4-x11 017 02:57:09
espresso 016 03:50:37
pypy 015 07:07:25
icedove 014 18:57:08
insighttoolkit4 014 17:16:43
qtbase-opensource-src 014 12:39:09
llvm-toolchain-3.4 012 03:06:15
mono 011 22:30:13
atlas 011 20:40:54
qemu 011 17:11:09
calligra 011 16:05:55
gnuradio 011 15:19:35
resiprocate 011 10:14:56
llvm-toolchain-snapshot 011 02:04:44
libav 010 13:52:03
python2.7 009 18:58:33
ghc 009 18:28:48
gnat-4.8 009 13:59:57
axiom 009 12:40:24
cython 009 00:47:04
openjdk-6 008 16:38:14
oce 008 10:29:20
eglibc 008 06:04:26
ppl 007 20:48:45
root-system 007 17:32:16
openturns 007 10:12:53
gcl 007 08:02:42
gcc-4.6 007 02:50:48
k3d 007 00:36:11
python3.3 007 00:25:42
llvm-toolchain-3.2 007 00:17:59
vtk 006 17:53:28
samba 006 17:17:27
mysql-workbench 006 14:36:46
kde-workspace 006 07:31:12
gmsh 006 04:32:42
psi-plus 006 04:30:08
octave 006 04:17:22
paraview 006 04:13:25
Timeformat is "days HH:MM:SS". Our ridiculously stable mv78x00 buildd's have served well, but has come to become let them rest. Now, to find out how many of these top time consuming packages can build with parallel make and are not doing so already.

20 December 2013

Riku Voipio: Replicant on Galaxy S3

I recently got myself a Galaxy S3 for testing out Replicant, an Android image made out of only open source components. Why a Galaxy S3? It is well supported in Replicant, almost every driver is already open source. The hardware specs are acceptable: 1.4GHz quad core, 1GB RAM, microSD, and all the peripheral chips one expects from a phone. The Galaxy S3 has sold insanely well (50 million units supposedly), meaning I won't run out of accessories and aftermarket spare parts any time soon. The massive installed base also means a huge potential user community. The S3 is still available as new, with two years of warranty. Why not? While the S3 is still available new, it is safe to assume production is ending already - a 1.5 year old product is ancient history in the mobile world! It remains to be seen how much the massive user base will defend against obsolescence. Upstream kernel support for the "old" CPU is an open question; Replicant still bases its kernel on the vendor kernel. The bootloader is unlocked, but it can't be changed due to trusted^Wtreacherous computing, preventing things like booting from an SD card. Finally, not everything is open source: the GPU (Mali) driver, while being reverse engineered, is taking its time - and the GPS hasn't been reversed yet. Installing Replicant: Before installing, you might want to take a copy of the firmware files from the original installation (since Replicant won't provide them). Enable developer mode on the S3 and:
sudo apt-get install android-tools
mkdir firmware
adb pull /system/vendor/firmware/
adb pull /system/etc/wifi
After that, just follow the official Replicant install guide for the S3. If you don't mind closed source firmware, post-install you need to push the firmware files back:

adb shell mount -o remount,rw /system
adb push . /system/vendor/firmware
Here was my first catch: the wifi firmware files from the Jelly Bean based image were not compatible with the older ICS based Replicant. Using Replicant:
Booting into Replicant is fast, a few seconds to the pin screen. You are treated to the standard Android lockscreen, and the usual slide/pin/pattern options are available. Basic functions like phone, SMS and web browsing have icons on the homescreen and work without a hitch. Likewise the camera seems to work; really the only smartphone feature missing is GPS. Sidenote - this image looks a LOT better on the S3 than on my thinkpad. No wonder people are flocking to phones and tablets when laptop makers use such crappy components.
The grid menu has the standard Android AOSP open source applications in the ICS style menu, with the extra of an F-droid icon - which is the installer for open source applications. F-droid is its own project that complements the Replicant project by maintaining a catalog of Free Software.
F-droid brings hundreds of open source applications not only to Replicant, but to any other Android users, including platforms with Android compatibility, such as Jolla's Sailfish OS. Of course the F-droid client is open source, like the F-droid server (in Debian too). The F-droid server is not just repository management, it can take care of building and deploying Android apps.
The WebKit based Android browser renders web sites without issues, and if you are not happy with it, you can download Firefox from F-droid. Many websites will notice you are mobile, and provide mobile web sites, which is sometimes good and sometimes annoying. Worse, some pages detect you are on Android and only offer you their closed Android app for viewing the page. OTOH I am already viewing their closed source website, so using a closed source app to view it isn't much worse. The keyboard is again the standard Android one, but for most unixy people the Hacker's Keyboard with arrow buttons and ctrl/alt will probably be the one you want. Closing thoughts: While using Replicant has been very smooth, the lack of GPS is becoming a deal-breaker. I could just copy the gpsd from CyanogenMod, like some have done, but it kind of defeats the purpose of having Replicant on the phone. So it might be that I move back to CyanogenMod, unless I find time to help reverse engineer the BCM4751 GPS.

22 July 2013

Riku Voipio: ACPI on ARM storm in teacup

A recent Google+ post by Jon Masters caused some stormy and some less stormy responses. A lot of the BIOS/UEFI/ACPI hate comes from X86, where ACPI is used for everything from suspending devices to reading buttons and setting LEDs. So when an X86 kernel suspends, it does magic calls to ACPI and prays that the firmware vendor did not screw it up. Now vendors do screw up, hence lots of cursing and ugly workarounds in the kernel follow. My Lenovo has a firmware bug where the FN-buttons and fan stop working if the laptop is attached to the AC adapter for too long. The fan is probably a simple i2c device the kernel could control directly without jumping through ACPI hiding-layer hoops. But the X86 people hold the view that it is better to trust the firmware engineer to control devices instead of having the kernel folks write device drivers to ... control devices! Now on ARM(64) the idea of using ACPI is to have none of that. Instead the idea is to use ACPI only to provide tables for enumerating what devices are available on the platform. Just like what device tree does. Now if this is the same as device tree, why bother? The main reason is to allow the distribution installer to behave the same on X86 / ARM / ARM64. This is crucial for distributions like Fedora and RHEL, where a cabal holds the point of view that X86 distribution development must not be constrained by ARM support. But it is also important for everyone that the method of installing your favorite distribution on an ARM64 server is standard and works the same for any server from any vendor. Now while UEFI and ACPI are definitely not my preferred solutions, I can accept them as a necessary evil for having a more standard platform.

4 February 2013

Riku Voipio: On behalf of aarch64 porters

Public service announcement: When porting GNU/Linux applications to a new architecture, such as 64-bit ARM, one gets familiar with the following error message:

checking build system type... x86_64-pc-linux-gnu
checking host system type... Invalid configuration `aarch64-oe-linux': machine `aarch64-oe' not recognized
configure: error: /bin/sh config.sub aarch64-oe-linux failed
This in itself is trivial to fix - run autoreconf, or just copy in new versions of config.sub and config.guess. However, when bootstrapping a distribution of 12000+ packages, this quickly becomes tiresome. Thus we have a small request:

If you are an upstream of software that uses autoconf - please run autoreconf against autotools-dev 20120210.1 or later, and make a release of your software. Aarch64 porters will be grateful as the updated software trickles down to distributions. This was the most discussed point during my FOSDEM talk "Porting applications to 64-Bit ARM".
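For completeness, the per-package workaround mentioned earlier amounts to roughly this, using the copies shipped by Debian's autotools-dev package:

$ sudo apt-get install autotools-dev
$ cp /usr/share/misc/config.sub /usr/share/misc/config.guess .

Or, if the package regenerates its build system cleanly, simply:

$ autoreconf -fi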

27 October 2012

Riku Voipio: My 5 eurocents on the Raspberry Pi opengl driver debacle

The Linux community is rather unenthusiastic about the open source shim for the Raspberry Pi GPU. I think the backlash is a bit unfair, even if the marketing from Raspberry was pompous - it is still a step in the right direction. More Raspberry Pi hardware enablement code is open source than before. A step in the wrong direction is "locking firmware", where a loadable firmware is made read-only by storing it in a ROM. It has been planned by the GTA04 phone project, and the Raspberry Pi foundation is also considering making a locked GPU ROM device. This is madness that needs to stop. It may fulfill the letter of the FSF guidelines, but it is firmly against the spirit. It does not increase freedom - it takes away the vendor's ability to provide bugfixes to the firmware, and the community's possibility of reverse engineering the firmware.

22 October 2012

Riku Voipio: F-droid store for android

Christopher complained that on Android you need a user account to install software. That is not true: there is F-droid, which is a catalog of Free/Open Source Software, no user account needed. Lack of multitasking is annoying sometimes, yes, but that is really a poweruser problem. Most users don't load web pages in the background while playing. Most users will simply not bother with web pages that take too long to load. Given the choice, most users will prefer a snappy UI over a UI that allows proper multitasking. The N900 was never snappy, stutters were common, and it would often hang for long times if you had too many apps or browser windows open at the same time. Even for Android there are way more complaints about the non-fluid UI than about the lack of proper multitasking. It is thus only logical for the Android developers to concentrate on developing a snappier UI over improving multitasking. The fact that Android has put 400+ million Linux computers out in the hands of consumers is one of the MOST AMAZING THINGS ever. But it seems others in the FOSS community consider Android a rather unfortunate event, because it's not "Real Linux" or "100% free software" or "because it's not bug free". Err, like Maemo or MeeGo were ever any of those... It is fair to complain about the usability quirks of Android as an end user. But if you believe in Free Software, you should also see the opportunity the open source parts of Android provide for you to fix it to your needs. After all, the point of Free Software is that you don't need to depend on upstream to fix everything for you. Christoph complains that the stock email app is lacking. The Android Email app is open source (the Maemo and N9 email apps are not), and it has been forked as K-9 Mail. The app lifecycle is fixable, allowing you to return from the browser to the mail you were at - at least plenty of other apps manage to do that. The question is, are we consumers or creators? If we are just sitting waiting for Google to provide us a perfect shrinkwrapped Android, we could just as well use an iPhone or Windows phone. We should instead see Android as an opportunity - when it's 90% open source, fix the remaining closed source bits, like open source 3D drivers and open source replacements for proprietary interfaces.

12 September 2012

Thibaut Girka: State of Multiarch Cross-toolchains three weeks after GSoC

Hi, it's been three weeks already since the end of the Google Summer of Code 2012, and I've been told I still haven't blogged about the state of multiarch cross-toolchains. So, here it is! Building multiarch cross-toolchains: This section is mostly a rehash of my previous post. Cross-toolchains build properly for most architectures in Debian, with the exception of arches having clashing ld.so paths. The following instructions assume you are using Wheezy or unstable, and have applied my patches. Adding a foreign architecture: You are going to use some packages meant for a different architecture than your native one. This means you'll have to tell dpkg about a foreign architecture, for instance armel.
# dpkg --add-architecture armel
Then, you need to update apt's database.
# apt-get update
If one of the repositories you use only supports a subset of your configured architectures (for instance, when working with an unofficial port), you will need to use arch restrictions in your sources.list (see the manpage). Cross-binutils: Absolutely nothing has changed there, it works exactly the same way it did before the GSoC. That is, grab binutils' source package, set $TARGET to a GNU triplet or a Debian arch, and run dpkg-buildpackage.
$ apt-get source binutils
$ cd binutils-*/
$ TARGET=arm-linux-gnueabihf dpkg-buildpackage -uc -us -b
Cross-gcc: Building cross-gcc is done the same way it was before the GSoC, except it uses cross-arch dependencies instead of mangled packages for foreign packages. This means you shouldn't use dpkg-cross at all, but use multiarch instead. Building the cross-compiler is then as easy as running (with $target being a Debian arch name):
$ export DEB_CROSS_NO_BIARCH=yes
$ echo $target > debian/arch
$ debian/rules control
$ dpkg-buildpackage -uc -us -b
Biarch/multilib and multiarch do not really play well together. Using some trickery, you might be able to build multilib cross-compilers, but I would recommend building two cross-compilers instead. You will of course need to install the required build-dependencies, some of which are packages for the target architecture, but that is no different from building any other Debian package, provided that multiarch is correctly configured. This will produce cross-compiler packages using multiarch paths and cross-arch deps, along with some binary packages for the target architecture. Those binary packages are cross-built versions of the runtime libs. They are not needed, as they should be identical to the natively-built ones present in the archive, but using them shouldn't be a problem. Installing the cross-toolchains: Installing cross-binutils is straightforward, and installing cross-gcc itself is too, but installing cross-g++ is not. Indeed, there is bug #678623: g++-4.7 depends on libstdc++6-4.7-dev, which is multiarch-unaware and depends on g++-4.7. Assuming you've compiled your cross-gcc using my patches, the cross-compiled libstdc++6-4.7-dev will not have these problems, but still won't be co-installable with your native libstdc++6-4.7-dev, since it's not M-A: same. The easiest way to work around this issue is probably to download the binary package, modify its control file to include Multi-Arch: same, and install the modified package. Using the cross-toolchains: Once installed, using the cross-toolchains should be as easy as running $triplet-gcc-4.7 file.c. Note the -4.7 part. Indeed, the cross-compiler packages do not provide the $triplet-gcc symlink. This matches what is done with the native compiler, where the gcc symlink is not provided by the compiler's package, but by the gcc package. However, there is no cross-gcc package or anything like that yet, which means you have to use the full name, or provide the symlink yourself. Another issue is pkg-config. To know where to search for libraries, pkg-config has to be told which architecture you are compiling for. Autotools does that by calling $triplet-pkg-config instead of pkg-config if it exists. The pkg-config binary package comes with a wrapper you can symlink to: /usr/share/pkg-config-crosswrapper. Conclusion: Despite a few shortcomings (bug #678623, missing symlinks, multilib issues and clashing ld.so paths), multiarch cross-toolchains are already usable on Wheezy. There is still quite a lot of work left (like fixing #678623, splitting gcc-4.7 in two, or changing wanna-build, britney etc. to handle cross-arch deps) but hopefully this will be done for Jessie.
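As a sketch of the two manual steps mentioned above - providing the unversioned compiler symlink and the pkg-config wrapper symlink - assuming an armhf cross-toolchain that installs the versioned binaries under /usr/bin (the /usr/local/bin prefix is just an example):

# ln -s /usr/bin/arm-linux-gnueabihf-gcc-4.7 /usr/local/bin/arm-linux-gnueabihf-gcc
# ln -s /usr/bin/arm-linux-gnueabihf-g++-4.7 /usr/local/bin/arm-linux-gnueabihf-g++
# ln -s /usr/share/pkg-config-crosswrapper /usr/local/bin/arm-linux-gnueabihf-pkg-config

After that, $triplet-gcc and $triplet-pkg-config should be picked up by autotools-based builds.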

7 June 2012

Riku Voipio: Adventures in perlsembly

Some Debian packages are missing important optimizations. This one was noticed when comparing the openssl benchmarks by monkeyiq to the results obtained on a Pandaboard (OMAP 4430). Being a Cortex-A9, it should clearly have been as fast as or faster than the N9 in the benchmark (OMAP 3630). And it was, except for the AES benchmarks. Since AES is quite important, that seemed a bit odd. It turns out the Debian/Ubuntu package was missing some hand-crafted ARM assembler optimizations. Enable them, and the results of the openssl speed benchmark are quite nice:

The 'numbers' are in 1000s of bytes per second processed.

benchmark debian with patch +%
sha1 55836.67k -> 73599.08k +31.811%
aes-128 cbc 18451.11k -> 36305.34k +96.765%
aes-256 cbc 13552.30k -> 27108.31k +100.027%
sha256 20092.25k -> 43469.45k +116.349%
sha512 8052.74k -> 37194.28k +361.884 %
rsa 1024 1904.2v/S -> 3650.5v/s +91.708 %
Curiously, the assembler code is actually kept in perl files that output assembler code. This kind of code is affectionately called "perlsembly". A bug with a patch has been filed; hopefully it will be applied soon.
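For reference, numbers like the ones above can be reproduced with the openssl speed command; the algorithm selection here is just an example:

$ openssl speed sha1 sha256 sha512 aes-128-cbc aes-256-cbc rsa1024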

14 May 2012

Riku Voipio: Mosh - better remote shell

In this age of 3D accelerated desktops and all that fancy stuff, one does not expect practical innovation to happen in the remote terminal emulation area. But it has just happened. It is called Mosh, shorthand for "Mobile Shell". What does it do better than the ssh we have learned to love? Mainly, it keeps sessions alive while your IP address changes (roaming between networks) and provides instant local echo on laggy links. It doesn't replace ssh, as it still borrows authentication from ssh. But that's cool, as you can keep your ssh authorized keys. Available in Debian unstable, testing and backports today, and on many other systems as well. Hopefully an Android client becomes available soon, as the above mentioned advantages seem really tailored for Android-like mobile systems. Caveat: this is new stuff, and thus hasn't quite been proven to be secure.
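Getting started is a couple of commands - the hostname is a placeholder, and mosh-server needs to be installed on the remote end as well:

$ sudo apt-get install mosh
$ mosh user@example.org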

27 April 2012

Riku Voipio: Cross Compiling with MultiArch

Congrats to the Ubuntu folks for the new LTS release. Incidentally this is also the first release where our work on MultiArch bears fruit. We can now cross-compile relatively easily without resorting to hacks like scratchbox, qemu chroots or dpkg-cross/xdeb. Let's show a short practical guide on cross-building Qemu for armhf. The instructions refer to precise, but the goodness is on the way to Debian as well, the biggest missing piece being the cross-compiler packages, for which we have a Summer of Code project. The example is operated in a chroot to avoid potentially messing up your working machine. Qemu-linaro is not a shining example, as cross-building it doesn't work out of the box. But that is educational: it allows me to show what kind of issues you can bump into, and how to fix them.

$ sudo debootstrap --variant=buildd precise /srv/precise
Edit /srv/precise/etc/apt/sources.list to the following (replace amd64 with i386 if you made an i386 chroot):

deb [arch=amd64] http://archive.ubuntu.com/ubuntu precise main universe
deb [arch=armhf] http://ports.ubuntu.com/ubuntu-ports precise main universe
deb-src http://archive.ubuntu.com/ubuntu precise main universe
Edit the /srv/precise/etc/dpkg/dpkg.cfg.d/multiarch by adding the following line:

foreign-architecture armhf
Finally disable install of recommends by editing /srv/precise/etc/apt/apt.conf.d/10local:

APT::Install-Recommends "0";
APT::Install-Suggests "0";
Install the armhf cross-compiler in the chroot:

$ sudo chroot /srv/precise/
# unset LANG LANGUAGE
# mount -t proc proc /proc
# apt-get update
# apt-get install g++-arm-linux-gnueabihf pkg-config-arm-linux-gnueabihf
Get the sources and try to install the cross build-deps:

# cd /tmp
# apt-get source qemu-linaro
# cd qemu-linaro-*
# apt-get build-dep -aarmhf qemu-linaro
As we see, the build-dep step bombs out ugly, with the theme of "perl" being unable to be installed. This is because apt-get can't figure out whether we should install an armhf or amd64 version of perl. We don't yet use the required syntax in the Build-Depends line, "perl:any", as dpkg and apt in previous releases don't support it - backporting would then no longer be possible. One way to fix it would be to drop the perl build-dep, as perl is already pulled in by other build-deps. But let's instead show how to install the build-deps manually. First the build-system build-deps, then the target architecture ones:

# apt-get install debhelper texinfo
# apt-get install zlib1g-dev:armhf libasound2-dev:armhf libsdl1.2-dev:armhf libx11-dev:armhf libpulse-dev:armhf libncurses5-dev:armhf libbrlapi-dev:armhf libcurl4-gnutls-dev:armhf libgnutls-dev:armhf libsasl2-dev:armhf uuid-dev:armhf libvdeplug2-dev:armhf libbluetooth-dev:armhf
And try the build[1]:

# dpkg-buildpackage -aarmhf -B
Which sadly errors out. It turns out the cross-build support in debian/rules is broken. Instead of --cc we need to feed a --cross-prefix to qemu's ./configure. Edit debian/rules, replacing

- conf_arch += --cc=$(DEB_HOST_GNU_TYPE)-gcc --cpu=$(QEMU_CPU)
+ conf_arch += --cross-prefix=$(DEB_HOST_GNU_TYPE)-
Optional: since we are cross-compiling on a multicore machine, let's also add parallel building support, by changing the override_dh_auto_build: rule in debian/rules to have --parallel flags as well:

override_dh_auto_build:
	# system build
	dh_auto_build -B system-build --parallel
ifeq ($(DEB_HOST_ARCH_OS),linux)
	# user build
	dh_auto_build -B user-build --parallel

	# static user build
	dh_auto_build -B user-static-build --parallel
endif

Try the build again:

# export DEB_BUILD_OPTIONS=parallel=4 # I have a dual-core hyperthreading machine, build with all four threads
# time dpkg-buildpackage -aarmhf -B
...
dpkg-buildpackage: binary only upload (no source included)

real 18m53.425s
user 65m42.570s
sys 2m50.383s
The native build of qemu-linaro took 4h 11min. [1] You should not build as root - but to keep the instructions short, I'm not explaining how to add and use an unprivileged user in a chroot. Do as I *say*, not as I *do*!
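To sanity-check the result, the architecture of the produced packages can be inspected with dpkg-deb - the filenames here are illustrative:

# dpkg-deb --info ../qemu-system_*_armhf.deb | grep Architecture
# dpkg-deb --contents ../qemu-system_*_armhf.deb | head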

1 February 2012

Cyril Brulebois: dpkg with multiarch

(This page was edited a few times, details are available.) Grab it while it's hot! Thanks to the hard work of dpkg developers and many (generations of) developers, multiarch is becoming a reality. If you want to give it a try, install dpkg from experimental, add a foreign architecture, and start trying to install packages. Example on amd64:
# dpkg --add-architecture i386
# dpkg --print-foreign-architectures
i386
# apt-get update
[ lots of amd64 and i386 Packages files get downloaded ]
# apt-get install mksh:i386
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
  ed:i386
The following NEW packages will be installed:
  mksh:i386
0 upgraded, 1 newly installed, 0 to remove and 9 not upgraded.
Need to get 414 kB of archives.
After this operation, 707 kB of additional disk space will be used.
Get:1 http://ftp.fr.debian.org/debian/ sid/main mksh i386 40.4-2 [414 kB]
Fetched 414 kB in 0s (664 kB/s)
Selecting previously unselected package mksh.
(Reading database ... 171933 files and directories currently installed.)
Unpacking mksh (from .../archives/mksh_40.4-2_i386.deb) ...
Processing triggers for menu ...
Processing triggers for man-db ...
Setting up mksh (40.4-2) ...
update-alternatives: using /bin/mksh to provide /bin/ksh (ksh) in auto mode.
Processing triggers for menu ...
# mksh
$ ldd $(which mksh)
linux-gate.so.1 =>  (0xf779c000)
libc.so.6 => /lib/i386-linux-gnu/i686/cmov/libc.so.6 (0xf75d6000)
libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xf75b9000)
/lib/ld-linux.so.2 (0xf779d000)
Of course, installing an i386 mksh package isn't exactly what multiarch is about. Dozens of packages have been patched already to add Multi-Arch fields, but until their (recursive) dependencies have been multiarch-ified, foreign packages can be uninstallable, as can be seen below, with the usual "why is it uninstallable?" hunt (shortened output):
# apt-get install psutils:i386
The following packages have unmet dependencies:
 psutils:i386 : Depends: libpaper1:i386 but it is not going to be installed
# apt-get install libpaper1:i386
The following packages have unmet dependencies:
 libpaper1:i386 : Depends: ucf:i386 (>= 0.28) but it is not installable
                  Recommends: libpaper-utils:i386 but it is not going to be installed
# apt-get install ucf:i386
E: Package 'ucf:i386' has no installation candidate
Another example, successful handling of the installation of a foreign package, while it's already installed with the host architecture:
# apt-get install xz-utils:i386
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  liblzma5:i386
Suggested packages:
  xz-lzma:i386
The following packages will be REMOVED:
  xz-utils
The following NEW packages will be installed:
  liblzma5:i386 xz-utils:i386
0 upgraded, 2 newly installed, 1 to remove and 9 not upgraded.
Need to get 440 kB of archives.
After this operation, 410 kB of additional disk space will be used.
Do you want to continue [Y/n]?
Get:1 http://ftp.fr.debian.org/debian/ sid/main liblzma5 i386 5.1.1alpha+20110809-3 [205 kB]
Get:2 http://ftp.fr.debian.org/debian/ sid/main xz-utils i386 5.1.1alpha+20110809-3 [235 kB]
Fetched 440 kB in 0s (478 kB/s)
dpkg: xz-utils: dependency problems, but removing anyway as you requested:
 dpkg depends on xz-utils.
 xz-lzma depends on xz-utils.
 dpkg-dev depends on xz-utils.
(Reading database ... 171952 files and directories currently installed.)
Removing xz-utils ...
Processing triggers for man-db ...
Selecting previously unselected package liblzma5:i386.
(Reading database ... 171908 files and directories currently installed.)
Unpacking liblzma5:i386 (from .../liblzma5_5.1.1alpha+20110809-3_i386.deb) ...
Selecting previously unselected package xz-utils.
Unpacking xz-utils (from .../xz-utils_5.1.1alpha+20110809-3_i386.deb) ...
Processing triggers for man-db ...
Setting up liblzma5:i386 (5.1.1alpha+20110809-3) ...
Setting up xz-utils (5.1.1alpha+20110809-3) ...
# dpkg -l xz-utils xz-utils:i386 'liblzma*:*'
[   ]
un  xz-utils            <none>
ii  xz-utils            5.1.1alpha+20110809-3
ii  liblzma5:amd64      5.1.1alpha+20110809-3
ii  liblzma5:i386       5.1.1alpha+20110809-3
# ldd $(which xz)
    linux-gate.so.1 =>  (0xf776b000)
    liblzma.so.5 => /usr/lib/i386-linux-gnu/liblzma.so.5 (0xf7724000)
    librt.so.1 => /lib/i386-linux-gnu/i686/cmov/librt.so.1 (0xf771b000)
    libpthread.so.0 => /lib/i386-linux-gnu/i686/cmov/libpthread.so.0 (0xf7701000)
    libc.so.6 => /lib/i386-linux-gnu/i686/cmov/libc.so.6 (0xf75a4000)
    /lib/ld-linux.so.2 (0xf776c000)
# zgrep -a '_("Activities")' gnome-shell-3.2.2.1.tar.xz
        this._label = new St.Label(  text: _("Activities")  );
zgrep is a shell script, but it calls xz, which is from the i386 package, and everything runs just fine. Running it through strace -f -e '', I discovered those messages, which I had never seen before:
[ Process PID=8900 runs in 32 bit mode. ]
[ Process PID=8899 runs in 64 bit mode. ]
[ Process PID=8900 runs in 32 bit mode. ]
[ Process PID=8899 runs in 64 bit mode. ]
 
What's next? We're late! It's time to check what happens with that dpkg package, report bugs, and convert more libraries! Please think of the kittens, and coordinate with the release team to make sure you don't delay a transition when uploading a shiny, multiarch-ified package. In a nutshell, if you received a patch from Steve Langasek or Riku Voipio, it's a good indication your package will be helpful quickly when it's multiarchified. Since zlib is directly depended on by 2000+ packages, #569697 was pinged right after the dpkg upload; but many other packages will need patches and heavy testing. Hurry up, the freeze is coming, we need to shake it up as soon as possible!
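For the library conversions mentioned above, the core of a typical patch is a couple of fields in debian/control - libfoo1 is a hypothetical package, and the library files must also be moved to the multiarch path /usr/lib/$(DEB_HOST_MULTIARCH):

Package: libfoo1
Architecture: any
Multi-Arch: same
Pre-Depends: ${misc:Pre-Depends}
Depends: ${shlibs:Depends}, ${misc:Depends}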

19 January 2012

Riku Voipio: CuBox


Just recently arrived from DHL: a SolidRun CuBox. I guess nobody who knows me will be surprised when I tell you it features an ARM CPU, specifically a Marvell Armada 510. It features ARMv7 compatibility, with the slight twist of replacing the NEON extensions with iWMMX extensions. On the boasting side, the Armada 510 promises 1080p video decoding and OpenGL ES graphics acceleration (closed source, unfortunately).
The tiny form factor of the CuBox makes the amount of connectors included all the more impressive:

* Gigabit ethernet
* 2*USB
* eSATA
* HDMI out
* s/pdif optical audio out
* microSD slot
* microUSB serial/jtag port

The last item is important as it makes the CuBox unbrickable.. Some will probably lament the lack of WiFi/Bluetooth, but you can't get everything in one device ;). Besides, the USB slots are there to be filled..
Getting started was a slightly rough ride, as X refused to start in the included Ubuntu (10.04 LTS). After wrongly suspecting that my display was at fault, it turned out the included microSD was slightly corrupted, and some critical contents of the xkb-data package were garbage. After reinstalling that package, everything worked, including playing Big Buck Bunny in FullHD with totem.
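For reference, reinstalling a corrupted package like that is a one-liner, assuming apt itself is still functional:

$ sudo apt-get install --reinstall xkb-data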
The biggest disappointment so far is the non-mainline kernel, based on an old 2.6.32.9. Some mainline support for the Armada 510 exists, but will it work with the proprietary graphics code?

27 June 2011

Riku Voipio: QEMU with OpenGL ES acceleration

On the graphics side, the major difference between ARM and X86 systems is that on ARM 3D acceleration is done with OpenGL ES, which is mostly a subset of the modern OpenGL used on X86 desktop machines. From QEMU's point of view this could mostly be ignored, as OpenGL was mostly used in games and specialist applications. This has been changing, as desktops and user interfaces have started using OpenGL to render graphics. Without acceleration, these user interfaces become slower than slugs crossing a tarpit.

For this reason, MeeGo introduced OpenGL ES acceleration support to QEMU. Given the lack of easily available MeeGo QEMU images and test setups, I've created a test setup for the Linaro 11.05 image.

Running the torus OpenGL ES1 demo from mesa-demos inside QEMU.

10 April 2011

Cyril Brulebois: Getting replaced by scripts

As already blogged by Riku, getting replaced by scripts is really great! Until now, I've had a crontab fetching my supporting-very-big-mails (up to 100MB or so) mailbox every few minutes, and looking into my =incoming-buildd/ maildir in a very regular fashion. With some simple mutt maildir hooks, replying to a Successful log would trigger extracting the changes file from there, setting some options, like PGP inline signing, and the mail would be ready to go back to the buildd. That part was just about being a GPG-signing monkey, so really not a fun part. Since we no longer have to worry about this boring and time-consuming task, I've switched to a crontab firing up 4 times a day, and I try to deal with all incoming mails at once. Coupled with the new filters (e.g. on out-of-date packages on the buildd status page), I started using my time to file FTBFS bugs again, so that maintainers notice their packages fail to build without having to wait for a non-happening testing migration. After 10 days, the following mutt filter in =debian-bts/ lists 69 bugs:
~s "Bug#.*FTBFS" ~P ~d 01/04/2011-
(Subject contains Bug#.*FTBFS, mails from me, starting from 01/04/2011.) Figuring out whether that's due to another package's bug, an outdated chroot, a temporary glitch, etc. might take some time; that's why it's a bit hard to stay on top of things; and when the backlog grows, motivation to go through this tedious task can be pretty low, especially when one sees repeated mistakes. I hope the amount of (possible) use cases for #620686 will decrease over time; instead, I'd be very happy if maintainers could at least check what's mentioned in configure.ac/configure.in. Keeping an eye on its diff between upstream versions should be easy enough. ;)

4 April 2011

Riku Voipio: Getting replaced by scripts - and it feels good!

As mentioned on Phil's blog, Debian buildds now have the capability of autosigning builds. Today when arriving home I was greeted by an empty feeling - there were no builds to sign. The long standing routine had come to an end.

Good system administrators constantly look at how to replace themselves with scripts and automation. Bad ones build job security by owning manual processes that nobody else can master.

With signing now done by some code instead of us, I (and other buildd admins) have more time and energy to work on irregular tasks - upgrading chroots, helping developers with porting problems etc. Not to mention blogging and updating ones own packages to sid..

Meanwhile, there are still some issues needing solving - $HOME isn't available on all buildds, and -volatile and binNMUs still need manual signing. Certainly not the smoothest update, but it is already making life easier.

26 March 2011

Philipp Kern: Debian ftpmaster Meeting Almost over

So since the last progress report I also got round to taking a look at the following issues:
It was a productive hacking event for me, that's for sure. But now it's almost over and they're actually stealing an hour from us tonight. I would've liked to go home with fewer items on my to-do list, though (i.e. it just grew, it didn't shrink).

7 November 2010

Adnan Hodzic: Ubuntu Developer Summit (UDS-N) Debianized Summary

Last week I was at my first Ubuntu Developer Summit (UDS-N). I wanted to attend this summit for a couple of reasons: 1. Developer: I'm part of Debian Java, working on the Eclipse IDE, and as you're developing Debian, you're automatically developing Ubuntu, no? 2. DebConf11 main organizer: I really wanted to see how it's all done on a corporate level, for instance we're going with hotels instead of student dorms and so on. 3. Migration to Linux infrastructure: make contacts and have some talks on this topic, because after DebConf11 in Bosnia a lot of institutions will most probably open their doors to open solutions such as Linux. I went to this summit without any expectations or plans, I just said I'll play it by ear. This year's UDS was held in Orlando, Florida; before Orlando I was planning to stay in NYC for a couple of days, so my whole plan was to get to NYC, then Orlando, then head back home. I Heart NYC: I spent a couple of days here with my cousins and friends; as always NYC never fails to surprise me, and I'll always stop by even if it's only for a couple of days. I wanted to meet up with a couple of Debian people as well, but unfortunately my schedule was too tight. Caribe Royale, Orlando: Please note that this whole trip was crazy (in a good way) starting from Sarajevo; on my flight from NYC to Orlando I see that the guy sitting in front of me is wearing a Canonical Landscape t-shirt. I approach him and it turns out he was Jamu Kakar, working on Landscape. Since my luggage was late we had conversations ranging from Debian over Ubuntu to Java to pretty much everything. While we were still at the airport, we bumped into two more people going to UDS. As we got to the hotel, the same moment we walked out of the cab there was Jorge Castro - whether he was there just to smoke a cigarette or whatever doesn't even matter, what matters is that even before walking into the hotel building I knew where the reception, my tower and the bar were, everything I needed to know at that moment. The hotel where everything was held was the Caribe Royale, and even though I looked at their website I thought there was no way it was all going to be that nice; to my delight it was an exact replica. The place is an "All-Suite Hotel and Convention Center", which meant that accommodation, the venue and everything else were in one place. 15 minutes after I checked in, I was at a bar talking to Jim Baker about Java, Jython and Python, the exact things I wanted to talk about at that given moment. What I said in these last couple of paragraphs sounds pretty cool, eh? Well to me this whole UDS was just like that, everybody was so accessible and open for discussion, from Jono Bacon walking around offering candy to the UDS reception staff Michelle, Marianna and others who were always there whether you had a complaint, a problem or a praise. I really should stop mentioning names, because I'll forget somebody, and it's just not fair because I had a feeling everybody was just there for you, whatever it was you needed. Ubuntu Developer Summit (UDS-N): Ubuntu, Debian, DebConf, differences? Even though I said I'd play it by ear, from the very beginning I thought I'd be a complete outsider, being part of Debian and all. Quite the contrary, I was everything but an outsider, and even more, there were plenty of Debian people there, including the current DPL Stefano Zacchiroli, Colin Watson and Riku Voipio, just to name a few. There was also a Debian Health Check plenary. But overall, there are differences between DebConf and UDS.
One of the things is that you can attend UDS remotely - I'm not sure how doable this is at DebConf. Then again, a vast difference is that UDS is filled with BOFs (as we would call them in Debian), meetings of different teams; I attended a lot of these, and most of them end with some kind of conclusion. Meanwhile, each day there was (only?) an hour of lectures on various topics, which at DebConf we would characterize maybe even as speed talks. However, everything was high paced, and it all ended up in "You think that's a problem? How do you think we could solve it?". I had a feeling that even if I didn't say something out loud, someone would hear me and approach me to discuss it. Everything was straightforward, all with the goal of resolving a certain issue or problem, which I absolutely loved. Fun: It wasn't all work, even though Orlando doesn't seem to be my kind of city from what I saw; it's way too magical for my taste. There were a lot of places where you could go to enjoy yourself. Good people at Canonical even offered me a free ticket to Disney World; there were a lot of destinations to visit and transportation was also organized by Canonical. Unfortunately I missed the notorious UDS party on Friday because I had to head home early, but possibly I'll make it next year. Even so, the hotel was that great and to be honest you didn't have to leave to look for a party. My favorite was Universal CityWalk; one night I decided to go see the Blue Man Group, and the best part of it was that I got pulled up on stage by one of the members and was part of one of the acts! This is definitely something everyone should see during their life: one moment you're laughing like a lunatic, while the next moment you're just having goose bumps from the performance itself. Amazing stuff. After all I wrote here along with the pictures, is there even anything else I need to say? I believe that even if I hadn't played it by ear, this whole trip would have exceeded all my expectations in every possible sense. Because that's really what happened - those three reasons I was going to UDS for? Far more happened than that. All I can say in the end is that I'll move closer to having a part in Ubuntu development, and yeah, I'll try to be a regular at UDS from now on. See you in Budapest? If all these photos weren't enough, the official group photo as well as a personal set of photos can be found at: UDS-N photos by 2010 Sean Sosik-Hamor

20 August 2010

Riku Voipio: (unofficial) Bits from ARM porters

Quite a few things have happened recently in Debian/ARM land:

ARM and Canonical have generously provided us with a bunch of fast armel machines. Four of these are now buildds, bringing the total of armel buildds to 7. In other words, armel should no longer lag behind other architectures when building unstable packages.

One of the machines, abel.debian.org, has been set up as a porter box. It is faster (around 3x) than the older porterbox (agricola). All Debian Developers have access to the porterboxes.

The new buildds have allowed us to enable more suites. Thanks to Philipp Kern's work, armel now builds experimental, lenny-volatile and lenny-backports as well as unstable/non-free. Especially if you are using stable Debian, access to backports and volatile should make life happier :)

Finally, the next big thing is the hardfloat ARM port, an effort led by Konstantinos Margaritis. This doesn't mean that the armel port is going away. The majority of ARM CPUs sold are still without an FPU, so the softfloat port (armel) will still have a long life ahead. Meanwhile, the armhf port will provide a more optimal platform for people with bleeding edge ARM cores (ARMv7 + VFP). Some people have been unhappy with the proposed new port, and various alternatives have been suggested. However, armhf is currently the only solution being actively worked on.

Update: thanks to Canonical too.
